195 research outputs found

    Optimal Nested Test Plan for Combinatorial Quantitative Group Testing

    Full text link
    We consider the quantitative group testing problem, where the objective is to identify defective items in a given population based on the results of tests performed on subsets of the population. Under the quantitative group testing model, the result of each test reveals the number of defective items in the tested group. The minimum number of tests achievable by nested test plans was established by Aigner and Schughart in 1985 within a minimax framework; the optimal nested test plan offering this performance, however, was not obtained. In this work, we establish the optimal nested test plan in closed form. This optimal nested test plan is also order optimal among all test plans as the population size approaches infinity. Using heavy-hitter detection as a case study, we show via simulation that the group testing approach improves detection accuracy and counter consumption by orders of magnitude over two prevailing sampling-based approaches. Other applications include anomaly detection and wideband spectrum sensing in cognitive radio systems.
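
    As a concrete illustration of the test model only (not the paper's closed-form nested plan), the sketch below simulates quantitative group tests on a binary population vector; the population size, defect rate, and the recursive halving strategy are illustrative assumptions.

    # A minimal sketch of the quantitative group testing model, assuming a
    # known binary population vector; the recursive halving strategy is
    # illustrative and is not the optimal nested plan derived in the paper.
    import random

    def quantitative_test(population, group):
        """A test on a group reveals how many of its items are defective."""
        return sum(population[i] for i in group)

    def find_defectives(population, group=None):
        """Recursively split groups, skipping any group that tests zero."""
        if group is None:
            group = list(range(len(population)))
        count = quantitative_test(population, group)
        if count == 0:
            return []
        if count == len(group):
            return list(group)           # every item in the group is defective
        mid = len(group) // 2
        return (find_defectives(population, group[:mid]) +
                find_defectives(population, group[mid:]))

    if __name__ == "__main__":
        random.seed(0)
        population = [1 if random.random() < 0.05 else 0 for _ in range(64)]
        found = find_defectives(population)
        assert found == [i for i, x in enumerate(population) if x == 1]
        print("defective items:", found)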

    Cooperative Peer-to-Peer Repair for Wireless Multimedia Broadcast

    Get PDF
    Abstract — This paper explores how to leverage IEEE 802.11-based cooperative peer-to-peer repair (CPR) to enhance the reliability of wireless multimedia broadcasting. We first formulate the CPR problem and present an algorithm that assumes global state information to optimally schedule CPR transmissions. Based on insights gained from the optimal algorithm, we propose a fully distributed CPR (DCPR) protocol. Simulation results demonstrate that the DCPR protocol can effectively enhance the reliability of wireless broadcast services with a repair latency comparable to that of optimal scheduling.
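
    The following is not the paper's optimal algorithm; it only sketches, under assumed global state, the kind of centralized scheduling decision involved: given which peers already hold which broadcast packets, repeatedly pick a repair transmission that serves the largest number of peers still missing a packet. The peer and packet counts, the single-sender greedy rule, and the assumption that a repair transmission reaches all peers are invented for illustration.

    # A hypothetical greedy CPR scheduling sketch, assuming global knowledge
    # of which peer received which broadcast packet; this illustrates
    # centralized repair scheduling, not the paper's optimal algorithm.
    import random

    def greedy_repair_schedule(received):
        """received[p][k] is True if peer p holds packet k.
        Returns a list of (sender, packet) repair transmissions, each chosen
        to cover the most peers currently missing that packet."""
        n_peers, n_packets = len(received), len(received[0])
        schedule = []
        while True:
            best, best_gain = None, 0
            for k in range(n_packets):
                missing = [p for p in range(n_peers) if not received[p][k]]
                holders = [p for p in range(n_peers) if received[p][k]]
                if missing and holders and len(missing) > best_gain:
                    best, best_gain = (holders[0], k), len(missing)
            if best is None:
                break                      # nothing left that can be repaired
            sender, packet = best
            schedule.append((sender, packet))
            for p in range(n_peers):       # assume a repair slot reaches all peers
                received[p][packet] = True
        return schedule

    if __name__ == "__main__":
        random.seed(1)
        state = [[random.random() < 0.7 for _ in range(8)] for _ in range(5)]
        print(greedy_repair_schedule(state))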

    Differentially Private Generative Adversarial Networks with Model Inversion

    Full text link
    To protect sensitive data when training a Generative Adversarial Network (GAN), the standard approach is differentially private (DP) stochastic gradient descent, in which controlled noise is added to the gradients. The quality of the output synthetic samples can be adversely affected, and training may not even converge in the presence of this noise. We propose the Differentially Private Model Inversion (DPMI) method, in which the private data is first mapped to the latent space via a public generator, followed by a lower-dimensional DP-GAN with better convergence properties. Experimental results on the standard datasets CIFAR10 and SVHN, as well as on a facial landmark dataset for Autism screening, show that our approach outperforms the standard DP-GAN method in terms of Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee. Comment: Best Student Paper Award of the 13th IEEE International Workshop on Information Forensics and Security (WIFS 2021), Montpellier, France
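
    As background for the DP-SGD baseline mentioned above (not the DPMI method itself), the sketch below shows the usual per-sample gradient clipping and Gaussian noising step; the clipping norm, noise multiplier, and toy gradient values are illustrative assumptions.

    # A minimal sketch of the DP-SGD step referenced above: clip each
    # per-sample gradient to a fixed L2 norm, average, and add Gaussian
    # noise. Clip norm, noise multiplier, and toy gradients are assumptions.
    import numpy as np

    def dp_sgd_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, seed=0):
        """Return a noisy average gradient with bounded per-sample influence."""
        rng = np.random.default_rng(seed)
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_sample_grads]            # clip each sample's gradient
        summed = np.sum(clipped, axis=0)
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        return (summed + noise) / len(per_sample_grads)  # noisy average gradient

    if __name__ == "__main__":
        grads = [np.array([0.5, -2.0]), np.array([3.0, 1.0]), np.array([-0.2, 0.4])]
        print(dp_sgd_gradient(grads))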

    Contextual Localization Through Network Traffic Analysis

    Get PDF
    …opportunities for content service providers to optimize the content delivery based on the user's location. Since sharing precise location remains a major privacy concern among users, many location-based services rely on contextual location (e.g. residence, cafe etc.) as opposed to acquiring the user's exact physical location. In this paper, we present PACL (Privacy-Aware Contextual Localizer), which can learn a user's contextual location just by passively monitoring the user's network traffic. PACL can discern a set of vital attributes (statistical and application-based) from the user's network traffic, and predict the user's contextual location with very high accuracy. We design and evaluate PACL using real-world network traces of over 1700 users with over 100 gigabytes of total data. Our results show that PACL (built using a decision tree) can predict a user's contextual location with an accuracy of around 87%.
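
    To make the classification setup concrete, the sketch below trains a decision tree on a handful of hypothetical per-session traffic features; the feature names, labels, and data are invented for illustration and are not PACL's actual attribute set or traces.

    # A hypothetical sketch of contextual-location classification from
    # traffic features with a decision tree; features and data are invented
    # and are not PACL's actual attributes or real-world traces.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # Assumed features per session: [mean packet size, flows/min, streaming fraction]
    X = rng.random((500, 3))
    y = rng.integers(0, 3, size=500)   # assumed labels: 0=residence, 1=cafe, 2=office

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))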

    Minimizing L1 over L2 norms on the gradient

    Full text link
    In this paper, we study the L1/L2 minimization on the gradient for imaging applications. Several recent works have demonstrated that L1/L2 is better than the L1 norm when approximating the L0 norm to promote sparsity. Consequently, we postulate that applying L1/L2 on the gradient is better than the classic total variation (the L1 norm on the gradient) to enforce the sparsity of the image gradient. To verify our hypothesis, we consider a constrained formulation to reveal empirical evidence on the superiority of L1/L2 over L1 when recovering piecewise constant signals from low-frequency measurements. Numerically, we design a specific splitting scheme, under which we can prove subsequential and global convergence for the alternating direction method of multipliers (ADMM) under certain conditions. Experimentally, we demonstrate visible improvements of L1/L2 over L1 and other nonconvex regularizations for image recovery from low-frequency measurements and two medical applications, MRI and CT reconstruction. All the numerical results show the efficiency of our proposed approach. Comment: 26 pages
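
    For reference, the constrained formulation discussed above can be written as follows; the measurement operator A, data b, and image u follow a commonly assumed notation rather than being quoted from the paper.

    % A sketch of the constrained L1/L2-on-the-gradient model described above;
    % the notation (A, b, u) is assumed, not necessarily the paper's.
    \begin{equation}
      \min_{u} \; \frac{\|\nabla u\|_{1}}{\|\nabla u\|_{2}}
      \quad \text{subject to} \quad A u = b,
    \end{equation}
    % whereas classic (anisotropic) total variation minimizes \|\nabla u\|_{1}
    % under the same constraint.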

    Benchmarking Adversarial Robustness of Compressed Deep Learning Models

    Full text link
    The increasing size of Deep Neural Networks (DNNs) poses a pressing need for model compression, particularly when they are employed on resource-constrained devices. Concurrently, the susceptibility of DNNs to adversarial attacks presents another significant hurdle. Despite substantial research on both model compression and adversarial robustness, their joint examination remains underexplored. Our study bridges this gap, seeking to understand the effect of adversarial inputs crafted for base models on their pruned versions. To examine this relationship, we have developed a comprehensive benchmark across diverse adversarial attacks and popular DNN models. We uniquely focus on models not previously exposed to adversarial training and apply pruning schemes optimized for accuracy and performance. Our findings reveal that while the benefits of pruning (enhanced generalizability, compression, and faster inference times) are preserved, adversarial robustness remains comparable to that of the base model. This suggests that model compression, while offering its unique advantages, does not undermine adversarial robustness.
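
    The sketch below illustrates the kind of comparison such a benchmark performs: prune a trained model with magnitude pruning and evaluate both the base model and its pruned copy on FGSM examples crafted against the base model. The tiny model, the pruning amount, and the attack budget are illustrative assumptions, not the paper's benchmark configuration.

    # A hypothetical sketch of comparing a base model and its pruned copy on
    # adversarial examples crafted against the base model (FGSM); the toy
    # model, pruning amount, and epsilon are assumptions, not the benchmark.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    def fgsm(model, x, y, eps=0.1):
        """Craft FGSM examples against `model` with an assumed budget eps."""
        x = x.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    def accuracy(model, x, y):
        return (model(x).argmax(dim=1) == y).float().mean().item()

    torch.manual_seed(0)
    base = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

    pruned = copy.deepcopy(base)
    for module in pruned:
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)  # 50% magnitude pruning

    x_adv = fgsm(base, x, y)                  # attacks crafted on the base model
    print("base   clean/adv:", accuracy(base, x, y), accuracy(base, x_adv, y))
    print("pruned clean/adv:", accuracy(pruned, x, y), accuracy(pruned, x_adv, y))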

    Fast Local Rerouting for Handling Transient Link Failures

    Full text link
    • …